Approximating Markov chains.


Similar articles

On Approximating the Stationary Distribution of Time-reversible Markov Chains

Approximating the stationary probability of a state in a Markov chain through Markov chain Monte Carlo techniques is, in general, inefficient. Standard random walk approaches require Õ(τ/π(v)) operations to approximate the probability π(v) of a state v in a chain with mixing time τ, and even the best available techniques still have complexity Õ(τ/π(v)); and since these complexities depend inve...
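As a purely illustrative aside (not taken from the paper), the random-walk approach this abstract refers to can be sketched as follows: simulate a long walk on the chain and report the fraction of post-burn-in steps spent in v. The 3-state transition matrix, step counts, and burn-in below are invented for the example; the point is that the number of steps needed grows with τ/π(v), which is exactly the inefficiency the paper addresses.

# Hedged sketch: Monte Carlo estimate of a stationary probability pi(v)
# by simulating a random walk on a small, assumed transition matrix P.
import random

P = [
    [0.5, 0.3, 0.2],   # hypothetical symmetric (hence reversible) 3-state chain
    [0.3, 0.4, 0.3],   # rows sum to 1
    [0.2, 0.3, 0.5],
]

def estimate_pi(v, steps=200_000, burn_in=10_000, start=0, seed=0):
    """Fraction of post-burn-in steps the walk spends in state v."""
    rng = random.Random(seed)
    state, hits = start, 0
    for t in range(steps):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        if t >= burn_in and state == v:
            hits += 1
    return hits / (steps - burn_in)

print(estimate_pi(v=2))  # rough estimate of pi(2), about 1/3 for this chain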


Empirical Bayes Estimation in Nonstationary Markov chains

Estimation procedures for nonstationary Markov chains appear to be relatively sparse. This work introduces empirical Bayes estimators for the transition probability matrix of a finite nonstationary Markov chain. The data are assumed to be of a panel study type in which each data set consists of a sequence of observations on N >= 2 independent and identically dis...
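The article's estimators are not reproduced here, but the general flavor of empirical Bayes shrinkage for transition probabilities can be sketched as below: sparse per-wave transition counts are smoothed toward a data-driven prior before row-normalizing. The function name, the pooled-frequency prior, and the shrinkage strength are all assumptions made for illustration, not the paper's construction.

# Hedged sketch: shrink observed transition counts toward pooled frequencies
# (a Dirichlet-style, data-driven prior), then row-normalize.
import numpy as np

def eb_transition_matrix(counts, strength=5.0):
    """counts: (S, S) array of observed i -> j transitions; returns a row-stochastic matrix."""
    counts = np.asarray(counts, dtype=float)
    pooled = counts.sum(axis=0)                      # how often each destination state occurs
    prior = strength * pooled / pooled.sum()         # prior pseudo-counts estimated from the data
    post = counts + prior                            # posterior-mean-style smoothing of each row
    return post / post.sum(axis=1, keepdims=True)

# Example with sparse counts from one panel wave of a 2-state chain.
print(eb_transition_matrix([[3, 1],
                            [0, 2]]))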


Approximating labelled Markov processes

Labelled Markov processes are probabilistic versions of labelled transition systems. In general, the state space of a labelled Markov process may be a continuum. In this paper, we study approximation techniques for continuous-state labelled Markov processes. We show that the collection of labelled Markov processes carries a Polish-space structure with a countable basis given by finite-state Mar...


Approximating Labeled Markov Processes

We study approximate reasoning about continuous-state labeled Markov processes. We show how to approximate a labeled Markov process by a family of finite-state labeled Markov chains. We show that the collection of labeled Markov processes carries a Polish space structure with a countable basis given by finite state Markov chains with rational probabilities. The primary technical tools that we d...
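Both of these abstracts concern approximating a continuous-state process by finite-state Markov chains. As a loose illustration of that idea only (the papers work with metric/Polish-space structure and labels, none of which appears here), one can bin a one-dimensional state space and estimate bin-to-bin transition probabilities by simulating the kernel; the kernel, bin count, and sample sizes below are invented for the example.

# Hedged sketch: discretize [0, 1] into bins and estimate a finite transition matrix
# by sampling one-step moves of an assumed continuous-state kernel.
import numpy as np

rng = np.random.default_rng(0)

def step(x):
    """Hypothetical kernel: add Gaussian noise, clamp back into [0, 1]."""
    return float(np.clip(x + rng.normal(scale=0.1), 0.0, 1.0))

def finite_approximation(n_bins=10, samples_per_bin=2000):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    P = np.zeros((n_bins, n_bins))
    for i in range(n_bins):
        for _ in range(samples_per_bin):
            x = rng.uniform(edges[i], edges[i + 1])      # start somewhere in bin i
            j = min(int(step(x) * n_bins), n_bins - 1)   # bin of the next state
            P[i, j] += 1
    return P / P.sum(axis=1, keepdims=True)              # row-stochastic approximation

print(finite_approximation(n_bins=5).round(2))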


Markov chains

[Tip: Study the MC, QT, and Little's law lectures together: CTMC (MC lecture), M/M/1 queue (QT lecture), Little's law lecture (when deriving the mean response time from mean number of customers), DTMC (MC lecture), M/M/1 queue derivation using DTMC analysis, derive distribution of response time in M/M/1 queue (QT lecture), relation between Markov property and memoryless property (MC lecture), ...
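As a small companion to this tip (standard M/M/1 formulas, not taken from the lectures themselves), the chain of results it lists can be worked end to end: the CTMC analysis of the M/M/1 queue gives the mean number in system L = ρ/(1 − ρ), and Little's law L = λW then yields the mean response time. The arrival and service rates below are arbitrary example values.

# Worked example: mean response time of an M/M/1 queue via Little's law.
lam, mu = 2.0, 5.0            # assumed arrival and service rates
rho = lam / mu                # utilization; must be < 1 for stability
L = rho / (1 - rho)           # mean number of customers in the system (M/M/1)
W = L / lam                   # Little's law: L = lam * W
print(f"rho={rho:.2f}, L={L:.2f}, W={W:.2f}")   # rho=0.40, L=0.67, W=0.33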



Journal

Journal title: Proceedings of the National Academy of Sciences

Year: 1992

ISSN: 0027-8424, 1091-6490

DOI: 10.1073/pnas.89.10.4432